
    pMIIND: an MPI-based population density simulation framework

    MIIND [1] is the first publicly available implementation of population density algorithms. Like neural mass models, these algorithms model at the population level rather than at the level of individual neurons, but unlike neural mass models they consider the full neuronal state space. The central concept is a population density: a probability distribution function that represents the probability of a neuron being in a certain part of state space. Neurons move through state space by their own intrinsic dynamics or driven by synaptic input. When individual spikes do not matter and only population-averaged quantities are considered, these methods outperform direct simulations using neuron point models by a factor of 10 or more, whilst (at the population level) producing results identical to simulations of spiking neurons. This is in general not true for neural mass models. Population density methods also relate closely to analytic evaluations of population dynamics.

    The evolution of the density function is given by a partial differential equation (PDE). In [3] a generic method was presented for solving this equation efficiently, both for small synaptic efficacies (the diffusion limit, where the PDE becomes a Fokker-Planck equation) and for large ones (finite jumps). We demonstrated that for leaky-integrate-and-fire (LIF) neurons this method reproduces analytic results [1] and requires on the order of 0.2 s to model 1 s of simulation time of an infinitely large population of spiking LIF neurons. We have now developed this method to apply to any 1D neuron point model [3], not just LIF neurons, and demonstrated the technique on quadratic-integrate-and-fire neurons. We are therefore in a position to model large heterogeneous networks of spiking neurons very efficiently.

    A potential bottleneck is MIIND's serial simulation loop. We developed an MPI implementation of MIIND's central simulation loop, starting from a fresh code base, and addressed serialization, which is now done at the level of individual cores. The central assumption in this setup is that firing rates, not individual spikes, are communicated, so bandwidth requirements are low. Latency is potentially a problem, but with the use of latency hiding techniques good scalability for up to 64 cores has been achieved on dedicated clusters. The scalability was verified with a simple model of cortical waves in a hexagonal network of populations with balanced excitation-inhibition. pMIIND is available on Sourceforge, through its git repository (git://http://miind.sourceforge.net); a CMake-based install procedure is provided. Since pMIIND is set up as a C++ framework, it is possible to define one's own algorithms and still take advantage of the MPI-based simulation loop.
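    The rate-based communication pattern described above lends itself to a compact sketch. The following is a minimal, hypothetical illustration (not pMIIND's actual code), assuming one population per MPI rank, toy sigmoid rate dynamics, and uniform all-to-all coupling; all names (localRate, allRates) are invented for the example.

    // Minimal sketch of an MPI simulation loop that exchanges population
    // firing rates (not spikes) each step. Hypothetical code illustrating
    // the communication pattern only; not pMIIND's actual implementation.
    #include <mpi.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nRanks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nRanks);

        const double dt = 1e-3;         // time step: 1 ms
        const int nSteps = 1000;        // 1 s of simulated time
        double localRate = 1.0 + rank;  // this rank's population rate (Hz)
        std::vector<double> allRates(nRanks);

        for (int step = 0; step < nSteps; ++step) {
            // One double per rank per step: bandwidth is O(nRanks),
            // independent of the size of the underlying populations.
            MPI_Allgather(&localRate, 1, MPI_DOUBLE,
                          allRates.data(), 1, MPI_DOUBLE, MPI_COMM_WORLD);

            // Toy dynamics: relax towards a sigmoid of the summed input.
            double input = 0.0;
            for (double r : allRates) input += 0.1 * r;
            localRate += dt * (-localRate + 100.0 / (1.0 + std::exp(-input)));
        }
        if (rank == 0) std::printf("rate[0] after 1 s: %.2f Hz\n", localRate);
        MPI_Finalize();
        return 0;
    }

    Because only firing rates cross rank boundaries, the latency hiding mentioned above would amount to replacing the blocking MPI_Allgather with a non-blocking MPI_Iallgather and overlapping the communication with the local density update.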

    The state of MIIND

    MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular, multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks, and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson–Cowan and Ornstein–Uhlenbeck types, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form that includes more biological detail. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to apply suitably adapted neuronal mechanisms of attention to these artificial models.
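    To make the extensibility principle concrete, here is a minimal sketch of how a user-defined network process could plug into a generic simulation loop through a shared abstract interface. The class and method names (AlgorithmInterface, evolve, rate) are hypothetical and do not reflect DynamicLib's actual API.

    // Sketch of the extensibility idea: algorithms share an abstract
    // interface so user-defined dynamics run in the same simulation loop.
    // Hypothetical names; not MIIND's actual DynamicLib API.
    #include <cmath>
    #include <cstdio>
    #include <memory>
    #include <vector>

    class AlgorithmInterface {
    public:
        virtual ~AlgorithmInterface() = default;
        virtual void evolve(double input, double dt) = 0;  // advance one step
        virtual double rate() const = 0;                   // population rate (Hz)
    };

    // Wilson-Cowan-style rate dynamics: tau * da/dt = -a + f(input).
    class WilsonCowanAlgorithm : public AlgorithmInterface {
    public:
        WilsonCowanAlgorithm(double tau, double maxRate)
            : tau_(tau), maxRate_(maxRate), activity_(0.0) {}
        void evolve(double input, double dt) override {
            double f = maxRate_ / (1.0 + std::exp(-input));  // sigmoid gain
            activity_ += dt * (-activity_ + f) / tau_;
        }
        double rate() const override { return activity_; }
    private:
        double tau_, maxRate_, activity_;
    };

    int main() {
        std::vector<std::unique_ptr<AlgorithmInterface>> nodes;
        nodes.push_back(std::make_unique<WilsonCowanAlgorithm>(0.01, 100.0));
        for (int step = 0; step < 1000; ++step)   // 1 s at dt = 1 ms
            for (auto& n : nodes) n->evolve(1.0, 1e-3);
        std::printf("steady-state rate: %.1f Hz\n", nodes[0]->rate());
        return 0;
    }

    An Ornstein-Uhlenbeck process, a population density algorithm, or a user's own model would implement the same interface, which is what allows an existing model to be refined (detailing) or embedded in a larger one (extensibility) without changing the surrounding loop.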

    Combinatorial structures and processing in Neural Blackboard Architectures

    We discuss and illustrate Neural Blackboard Architectures (NBAs) as the basis for variable binding and combinatorial processing in the brain. We focus on the NBA for sentence structure. NBAs are based on the notion that conceptual representations are in situ and hence cannot be copied or transported. Novel combinatorial structures can be formed with these representations by embedding them in NBAs. We discuss and illustrate the main characteristics of this form of combinatorial processing. We also illustrate the NBA for sentence structures by simulating neural activity as found in recently reported intracranial brain observations. Furthermore, we show how the NBA can account for ambiguity resolution and garden path effects in sentence processing.

    The necessity of connection structures in neural models of variable binding

    In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. The one type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding, because they are the only type of models that can produce behavior. We show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's conclusion that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in the wrong frame of reference: an outside frame instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We illustrate binding with this type of model using episode binding and the binding of words, including novel words, in sentence structures.

    Population density equations for stochastic processes with memory kernels

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization, the generalized Montroll-Weiss equation (a recent result from random network theory), which describes a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky- and quadratic-integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses accurately for both models, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
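    For orientation, the classical Montroll-Weiss equation, of which the equation used above is a generalization, relates the Fourier-Laplace transform of the walker's density to the waiting-time density \psi(t) and the jump-size density \lambda(x). This is the textbook continuous-time random walk result; the generalized form from random network theory is not reproduced here:

    \hat{\rho}(k,s) = \frac{1-\hat{\psi}(s)}{s}\,\frac{1}{1-\hat{\psi}(s)\,\hat{\lambda}(k)}

    For a Markovian (Poisson) renewal process with rate \nu one has \hat{\psi}(s) = \nu/(\nu+s), and the equation inverts to the ordinary master equation \partial_t \rho = \nu(\lambda \ast \rho - \rho); gamma-distributed interspike intervals, as used above, yield a genuinely non-Markovian memory kernel.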